Emotion Recognition from Decision Level Fusion of Visual and Acoustic Features using Hausdorff Classifier
Author
Abstract
Emotions are generally measured by analyzing head movement patterns, eyelid movements, facial expressions, or all of these together. For emotion recognition, visual sensing of facial expressions is helpful but not always sufficient. Additional information that can be collected in a non-intrusive manner is therefore needed to increase robustness. We find acoustic information to be appropriate, provided the person generates vocal signals by speaking, shouting, crying, etc. In this paper, appropriate visual and acoustic features of the driver are identified based on experimental analysis. For both the visual and the acoustic features, Linear Discriminant Analysis (LDA) is used for dimensionality reduction, and the Hausdorff distance is used for emotion classification. We propose a decision-level fusion technique that combines visual sensing of facial expressions with pattern recognition from voice. The performance of the proposed approach is evaluated on the Vera am Mittag (VAM) emotion recognition database under various conditions.
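The sketch below is only a minimal illustration of the pipeline described in the abstract, not the authors' implementation. It assumes scikit-learn's LinearDiscriminantAnalysis for dimensionality reduction, SciPy's directed_hausdorff for the set distance, a hypothetical four-emotion label set, synthetic placeholder features for both modalities, and an equal-weight decision-level fusion rule; all of these are assumptions added for illustration. Each emotion class keeps its LDA-projected training frames as a template set, so a test clip is compared set-to-set rather than frame-to-frame.

```python
# A minimal sketch (assumptions only, not the paper's code): per-frame visual
# and acoustic features are reduced with LDA, a Hausdorff-distance classifier
# scores a test frame set against per-class template sets, and the two
# modalities' scores are fused at the decision level.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

EMOTIONS = ["neutral", "happy", "angry", "sad"]  # assumed label set


def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])


class HausdorffClassifier:
    """Score a test frame set against per-class template frame sets."""

    def fit(self, X_frames, y_frames):
        self.classes_ = np.unique(y_frames)
        self.templates_ = {c: X_frames[y_frames == c] for c in self.classes_}
        return self

    def scores(self, frames):
        # Smaller Hausdorff distance -> higher (normalised) class score.
        d = np.array([hausdorff(frames, self.templates_[c]) for c in self.classes_])
        s = 1.0 / (d + 1e-9)
        return s / s.sum()


def train_modality(X_frames, y_frames, n_components=3):
    """Reduce one modality's frame features with LDA, then fit the classifier."""
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    Z = lda.fit_transform(X_frames, y_frames)
    return lda, HausdorffClassifier().fit(Z, y_frames)


def fuse_decisions(score_visual, score_acoustic, w_visual=0.5):
    """Decision-level fusion: weighted sum of per-modality class scores."""
    fused = w_visual * score_visual + (1.0 - w_visual) * score_acoustic
    return int(np.argmax(fused))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, len(EMOTIONS), 400)          # placeholder frame labels
    X_vis = rng.normal(size=(400, 30)) + y[:, None]  # placeholder visual features
    X_aco = rng.normal(size=(400, 12)) + y[:, None]  # placeholder acoustic features

    lda_v, clf_v = train_modality(X_vis, y)
    lda_a, clf_a = train_modality(X_aco, y)

    # A "test clip": the first 10 frames of each modality.
    s_v = clf_v.scores(lda_v.transform(X_vis[:10]))
    s_a = clf_a.scores(lda_a.transform(X_aco[:10]))
    print("Fused prediction:", EMOTIONS[fuse_decisions(s_v, s_a)])
```

The fusion weight w_visual is a hypothetical parameter; in practice it would be tuned per modality on a validation set.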
Similar resources
Decision Level Fusion of Visual and Acoustic Features of the Driver for Real-time Driver Monitoring System
Poor attention of drivers towards driving can cause accidents that harm the driver or surrounding people. Poor attention is caused not only by drowsiness but also by the driver's various emotions/moods (for example, sadness, anger, joy, pleasure, despair, and irritation). Emotions are generally measured by analyzing either head movement patterns or eyelid moveme...
Comparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech
The automatic analysis of speech to detect affective states may improve the way users interact with electronic devices. However, analysis at the acoustic level alone may not be enough to determine the emotion of a user in a realistic scenario. In this paper we analyzed the spontaneous speech recordings of the FAU Aibo Corpus at the acoustic and linguistic levels to extract two sets of fe...
Fusion Framework for Emotional Electrocardiogram and Galvanic Skin Response Recognition: Applying Wavelet Transform
Introduction: To extract and combine information from different modalities, fusion techniques are commonly applied to improve system performance. In this study, we aimed to examine the effectiveness of fusion techniques in emotion recognition. Materials and Methods: Electrocardiogram (ECG) and galvanic skin response (GSR) signals of 11 healthy female students (mean age: 22.73±1.68 years) were collected ...
Urban Vegetation Recognition Based on the Decision Level Fusion of Hyperspectral and Lidar Data
Introduction: Information about vegetation cover and its health has always been of interest to ecologists due to its importance in terms of habitat, energy production, and other characteristics of plants on Earth. Nowadays, developments in remote sensing technologies have made more remotely sensed data accessible to researchers. The combination of these data improves the obje...
Bimodal Emotion Recognition from Speech and Text
This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-seven acoustic features are extracted from the speech input. Two different classifiers, Support Vector Machines (SVMs) and a BP neural network, are adopted to classify the emotional states. In text analysis, we use a two-step classification method to recognize ...
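The entry above mentions classifying emotional states from thirty-seven acoustic features with an SVM. The snippet below is a generic sketch of that kind of acoustic-feature classifier under stated assumptions, not the cited paper's setup: it uses scikit-learn's SVC on placeholder random features and labels, so the reported accuracy is only near chance.

```python
# Generic sketch (assumptions only): an RBF-kernel SVM trained on a
# hypothetical matrix of 37 acoustic features per utterance.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 37))   # placeholder acoustic feature vectors
y = rng.integers(0, 4, 300)      # placeholder emotion labels (4 classes)

clf = SVC(kernel="rbf", probability=True).fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))  # ~chance on random data
```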
Journal title:
Volume / Issue:
Pages: -
Publication date: 2011